The first week of the course covered the basic tools and requirements for the rest of the weeks. I practiced the basics of R by completing a DataCamp exercise called “R Short and Sweet”. In addition, I brushed up on Git and R Markdown by going through the first workshop, as detailed below.
Note: Something that was not mentioned in the course but could be useful to include in the future: RStudio works hand-in-hand with GitHub, so R scripts and Rmd documents can be linked to GitHub directly. The parent directory of the course (downloaded locally to the PC) can be imported as a GitHub project and authorized with one's username and password.
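As a minimal sketch of one way to do this linking from the R console (assuming the usethis package is installed and the working directory is the course's RStudio project; this is not part of the course material):
library(usethis)
use_git()     # initialise a local git repository for the project
use_github()  # create a GitHub repository and link it (prompts for authorization)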
keywords: github, linux, rstudio, rmarkdown
During the second week of this course, we delved deeper into R and statistics. We learned about regression models and the application of R in statistical modeling. The DataCamp exercises, along with the two embedded videos, provided good background on the topics. Chapter 3 of “An Introduction to Statistical Learning with Applications in R” covered linear regression in depth.
After going through the study materials, I attempted the RStudio exercise. The first part of the exercise was related to data wrangling, where a subset table was generated from a table of raw observations. The R script used to create the table can be found here.
The data used in this exercise comes from an international survey of approaches to learning conducted by Kimmo Vehkalahti. The survey was funded by Teachers’ Academy funding (2013-2015), and the data were collected between December 2014 and January 2015. The survey was conducted in Finland with the aim of understanding the relationship between learning approaches and students’ achievements in an introductory statistics course. A total of 183 individuals were included in the survey, and the students were assessed for three different studying approaches: the surface approach, the deep approach and the strategic approach. Additional details about the survey can be found here. After preprocessing in the data wrangling step, we read the data into R and applied regression models.
lrn2014<-read.table("data/learning2014.csv")
str(lrn2014)
## 'data.frame': 166 obs. of 7 variables:
## $ gender : Factor w/ 2 levels "F","M": 1 2 1 2 2 1 2 1 2 1 ...
## $ age : int 53 55 49 53 49 38 50 37 37 42 ...
## $ attitude: num 3.7 3.1 2.5 3.5 3.7 3.8 3.5 2.9 3.8 2.1 ...
## $ deep : num 3.58 2.92 3.5 3.5 3.67 ...
## $ stra : num 3.38 2.75 3.62 3.12 3.62 ...
## $ surf : num 2.58 3.17 2.25 2.25 2.83 ...
## $ points : int 25 12 24 10 22 21 21 31 24 26 ...
dim(lrn2014)
## [1] 166 7
The final table used for the analysis contains seven variables measured for 166 individuals (see above). Among the variables, gender is a factor, age and points are integers, whereas attitude, deep, stra and surf are numeric (decimal-valued) variables.
summary(lrn2014)
## gender age attitude deep stra
## F:110 Min. :17.00 Min. :1.400 Min. :1.583 Min. :1.250
## M: 56 1st Qu.:21.00 1st Qu.:2.600 1st Qu.:3.333 1st Qu.:2.625
## Median :22.00 Median :3.200 Median :3.667 Median :3.188
## Mean :25.51 Mean :3.143 Mean :3.680 Mean :3.121
## 3rd Qu.:27.00 3rd Qu.:3.700 3rd Qu.:4.083 3rd Qu.:3.625
## Max. :55.00 Max. :5.000 Max. :4.917 Max. :5.000
## surf points
## Min. :1.583 Min. : 7.00
## 1st Qu.:2.417 1st Qu.:19.00
## Median :2.833 Median :23.00
## Mean :2.787 Mean :22.72
## 3rd Qu.:3.167 3rd Qu.:27.75
## Max. :4.333 Max. :33.00
The number of females (n = 110) in this survey is almost twice the number of males (n = 56). The students' ages ranged from 17 up to 55 years.
library(GGally)
plot_lrn2014 <- ggpairs(lrn2014, mapping = aes(col = gender, alpha = 0.3),
                        lower = list(combo = wrap("facethist", bins = 20)))
plot_lrn2014
The graphical overview of the data is shown above. The overall goal of the survey is to identify how the students' age, attitude towards learning and the three learning approaches contribute to the final exam points. In general, attitude towards learning has the strongest relationship with the outcome of the study (i.e. points scored), whereas the deep learning approach appears to have hardly any impact.
The explanatory variables were selected based on their absolute correlations with exam points. The three top-correlated explanatory variables (also visible in the plot above) are the student's attitude towards learning (attitude), the strategic approach (stra) and the surface learning approach (surf). The model regressing exam points on these three explanatory variables has a maximum residual of 10.9 and a median residual of 0.5. Here, a residual is the value that remains after the predicted value is subtracted from the observed value. The model summary showed that attitude is a highly significant (Pr = 1.93e-08) variable affecting the student's exam points. On the other hand, the strategic and surface approaches are not significant (Pr > 0.05).
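As a sketch of this selection step, the numeric candidate predictors can be ranked by their absolute correlation with points:
# a sketch: rank the numeric candidate predictors by absolute correlation with exam points
num_vars <- c("age", "attitude", "deep", "stra", "surf")
cors <- sapply(lrn2014[, num_vars], cor, y = lrn2014$points)
sort(abs(cors), decreasing = TRUE)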
model<-lm(points ~ attitude + stra + surf, data = lrn2014)
summary(model)
##
## Call:
## lm(formula = points ~ attitude + stra + surf, data = lrn2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -17.1550 -3.4346 0.5156 3.6401 10.8952
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.0171 3.6837 2.991 0.00322 **
## attitude 3.3952 0.5741 5.913 1.93e-08 ***
## stra 0.8531 0.5416 1.575 0.11716
## surf -0.5861 0.8014 -0.731 0.46563
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.296 on 162 degrees of freedom
## Multiple R-squared: 0.2074, Adjusted R-squared: 0.1927
## F-statistic: 14.13 on 3 and 162 DF, p-value: 3.156e-08
The summary of the model after removing the insignificant variables is shown below. The adjusted R-squared decreased slightly, from 0.1927 (earlier model) to 0.1856 (updated model). However, other criteria for model evaluation improved: the F-statistic rose from 14.13 to 38.61 and the p-value dropped from 3.156e-08 to 4.119e-09. Thus, we can conclude that the R-squared value alone may not determine the quality of a model. In this particular case, the low R-squared could be partly due to outliers in the data.
model_sig<-lm(points ~ attitude, data = lrn2014)
summary(model_sig)
##
## Call:
## lm(formula = points ~ attitude, data = lrn2014)
##
## Residuals:
## Min 1Q Median 3Q Max
## -16.9763 -3.2119 0.4339 4.1534 10.6645
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 11.6372 1.8303 6.358 1.95e-09 ***
## attitude 3.5255 0.5674 6.214 4.12e-09 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 5.32 on 164 degrees of freedom
## Multiple R-squared: 0.1906, Adjusted R-squared: 0.1856
## F-statistic: 38.61 on 1 and 164 DF, p-value: 4.119e-09
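Since the attitude-only model is nested in the three-predictor model, the two can also be compared directly with an F-test (a quick sketch, using the model objects fitted above):
# sketch: F-test comparing the reduced model against the full three-predictor model
anova(model_sig, model)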
par(mfrow = c(2,2))
plot(model_sig, which = c(1,2,5))
Three diagnostic plots are generated above. The assumptions behind the model are linearity and normality of the errors. Based on the plots, we can conclude that the errors are approximately normally distributed (clearly seen in the Q-Q plot). Similarly, the residuals vs fitted plot shows no clear pattern, so the errors do not depend on the attitude variable. Moreover, the residuals vs leverage plot shows that even the two points towards the right have only minor influence on the fit, so the outliers are handled reasonably well. Thus, the model assumptions are more or less valid.
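To complement the visual check, the most influential observations can also be inspected numerically (a sketch using Cook's distance on the fitted model):
# sketch: the three largest Cook's distances for the final model
cooks <- cooks.distance(model_sig)
head(sort(cooks, decreasing = TRUE), 3)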
One way to move on from linear regression is to consider settings where the dependent (target) variable is discrete. This opens a wide range of possibilities for modelling phenomena beyond the assumptions of continuity or normality.
Logistic regression is a powerful method that is well suited for predicting and classifying data by working with probabilities. It belongs to a large family of statistical models called Generalized Linear Models (GLM). An important special case that involves a binary target (taking only the values 0 or 1) is the most typical and popular form of logistic regression.
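As a minimal sketch with simulated data (not part of the course material), fitting a binary-target GLM in R looks like this:
# minimal sketch with simulated data: logistic regression via glm()
set.seed(1)
x <- rnorm(100)
y <- rbinom(100, size = 1, prob = plogis(-0.5 + 1.2 * x))  # true model on the logit scale
fit <- glm(y ~ x, family = "binomial")
coef(fit)  # intercept and slope estimates on the log-odds scale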
We will learn the concept of odds ratio (OR), which helps to understand and interpret the estimated coefficients of a logistic regression model. We also take a brief look at cross-validation, an important principle and technique for assessing the performance of a statistical model with another data set, for example by splitting the data into a training set and a testing set.
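A bare-bones sketch of the training/testing idea, with a hypothetical data frame df and binary target y (both names are placeholders, not from the course data):
# sketch: hold out 20% of the data to estimate test-set error
set.seed(42)
idx <- sample(nrow(df), size = 0.8 * nrow(df))
fit <- glm(y ~ ., data = df[idx, ], family = "binomial")
pred <- predict(fit, newdata = df[-idx, ], type = "response") > 0.5
mean(pred != df$y[-idx])  # proportion of wrong predictions on the test set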
The slides and videos related to logistic regression can be found below.
Video: Logistic regression: probability and odds
Video: Logistic regression: Odds ratios
Video: Cross-validation: training and testing sets
Slides: Logistic regression
After going through the videos, we practiced the DataCamp exercises on logistic regression and started to work on the workshop (RStudio Exercise 3).
The data for Exercise 3 was downloaded from the UCI Machine Learning Repository (link). The zipped file contained two tables, student-mat.csv and student-por.csv. In this data wrangling exercise, the main task was to join the two data sets and create a data frame for the logistic regression analysis. More detailed information about the data is given in the next section (Data Analysis) of this exercise. The R script associated with this exercise can be found here.
The joined student alcohol consumption data created during the wrangling exercise was read into R.
alc<-read.table("data/alc.csv")
#head(alc)
colnames(alc)
## [1] "school" "sex" "age" "address" "famsize"
## [6] "Pstatus" "Medu" "Fedu" "Mjob" "Fjob"
## [11] "reason" "nursery" "internet" "guardian" "traveltime"
## [16] "studytime" "failures" "schoolsup" "famsup" "paid"
## [21] "activities" "higher" "romantic" "famrel" "freetime"
## [26] "goout" "Dalc" "Walc" "health" "absences"
## [31] "G1" "G2" "G3" "alc_use" "high_use"
The data set in this exercise is a collection of information associated with students' performance in two Portuguese high schools. Two subjects, Mathematics (mat) and Portuguese language (por), were chosen for this study. The findings from the study were published in the Proceedings of the 5th FUture BUsiness TEChnology Conference (FUBUTEC 2008), April 2008, Porto, Portugal (link). Altogether 33 attributes were assessed, covering different aspects of students' lives. More detailed attribute information can be found here.
Here, the main goal of the analysis is to study how alcohol consumption is associated with other aspects of a student's life. After going through the background information, it is a bit easier to identify variables that could be related to alcohol consumption. Personally, I believe the following four variables are interesting candidates:
Weekly study time (studytime): In my opinion, if a student spends more time studying, he/she will have less time for alcohol consumption.
Going out with friends (goout): In general, students go out with friends for parties and get-togethers. Attending such parties and gatherings will lead to higher alcohol consumption compared to students who do not participate in such activities.
Number of school absences (absences): We can think of two links between alcohol consumption and school absences. The main one is that when a student consumes alcohol (especially in the evening), he/she will have less desire to go to school the next day (depending on the level of consumption). Another reason might be that a student is absent from class because he/she plans to drink alcoholic beverages.
Quality of family relationships (famrel): I think the quality of family relationships will also affect a student's attitude towards alcohol, and a student with a bad family relationship may consume more alcohol than one with a better family relationship.
In the following sections, we will see in detail how well my hypotheses are supported by the data. First, let's summarise the subset of the table that includes the variables I have chosen.
library(dplyr)
##
## Attaching package: 'dplyr'
## The following object is masked from 'package:GGally':
##
## nasa
## The following objects are masked from 'package:stats':
##
## filter, lag
## The following objects are masked from 'package:base':
##
## intersect, setdiff, setequal, union
my_var<- c("studytime", "absences", "goout", "famrel", "high_use")
my_var_data <- select(alc, one_of(my_var))
str(my_var_data)
## 'data.frame': 382 obs. of 5 variables:
## $ studytime: int 2 2 2 3 2 2 2 2 2 2 ...
## $ absences : int 5 3 8 1 2 8 0 4 0 0 ...
## $ goout : int 4 3 2 2 2 2 4 4 2 1 ...
## $ famrel : int 4 5 4 3 4 5 4 4 4 5 ...
## $ high_use : logi FALSE FALSE TRUE FALSE FALSE FALSE ...
All my chosen variables have integer values, whereas the information about alcohol consumption is logical, i.e. TRUE or FALSE. Moreover, the str function also revealed the dimensions of the selected data: 382 observations of five variables. After checking the data types, we can proceed with summarizing the table as follows:
summary(my_var_data)
## studytime absences goout famrel
## Min. :1.000 Min. : 0.0 Min. :1.000 Min. :1.000
## 1st Qu.:1.000 1st Qu.: 1.0 1st Qu.:2.000 1st Qu.:4.000
## Median :2.000 Median : 3.0 Median :3.000 Median :4.000
## Mean :2.037 Mean : 4.5 Mean :3.113 Mean :3.937
## 3rd Qu.:2.000 3rd Qu.: 6.0 3rd Qu.:4.000 3rd Qu.:5.000
## Max. :4.000 Max. :45.0 Max. :5.000 Max. :5.000
## high_use
## Mode :logical
## FALSE:268
## TRUE :114
##
##
##
The summary provides basic statistics about each variable (see the table above). If we pick a particular variable, absences (the number of school absences), we can see that some students were never absent (min = 0), whereas one or two students were absent up to 45 days (max = 45). Overall, the median values reflect the natures of our hypotheses better than the means, i.e. more vs. less (studytime, absences, goout) and good vs. bad (famrel). Accordingly, the medians serve as natural cut-offs for classifying students into the upper or lower level of each variable.
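For example (a sketch), a variable can be dichotomised at its median and cross-tabulated against high alcohol use:
# sketch: split absences at the median and cross-tabulate with high_use
more_absent <- alc$absences > median(alc$absences)
table(more_absent, alc$high_use)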
We can have a graphical representation of each of the variables as bar charts (see below).
library(tidyr)
library(ggplot2)
gather(my_var_data) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar()
The cross-tabulations below show how high alcohol use is distributed across the levels of each selected variable.
t1 <- table("Study Time" = alc$studytime, "Alcohol Usage" = alc$high_use)
round(prop.table(t1, 1)*100, 1)
## Alcohol Usage
## Study Time FALSE TRUE
## 1 58.0 42.0
## 2 69.2 30.8
## 3 86.7 13.3
## 4 85.2 14.8
t2 <- table("Going Out" = alc$goout, "Alcohol Usage" = alc$high_use)
round(prop.table(t2, 1)*100, 1)
## Alcohol Usage
## Going Out FALSE TRUE
## 1 86.4 13.6
## 2 84.0 16.0
## 3 81.7 18.3
## 4 50.6 49.4
## 5 39.6 60.4
t3 <- table("Absences" = alc$absences, "Alcohol Usage" = alc$high_use)
round(prop.table(t3, 1)*100, 1)
## Alcohol Usage
## Absences FALSE TRUE
## 0 80.0 20.0
## 1 74.5 25.5
## 2 72.4 27.6
## 3 80.5 19.5
## 4 66.7 33.3
## 5 72.7 27.3
## 6 76.2 23.8
## 7 75.0 25.0
## 8 70.0 30.0
## 9 50.0 50.0
## 10 71.4 28.6
## 11 33.3 66.7
## 12 50.0 50.0
## 13 50.0 50.0
## 14 14.3 85.7
## 16 0.0 100.0
## 17 0.0 100.0
## 18 50.0 50.0
## 19 0.0 100.0
## 20 100.0 0.0
## 21 50.0 50.0
## 26 0.0 100.0
## 27 0.0 100.0
## 29 0.0 100.0
## 44 0.0 100.0
## 45 100.0 0.0
t4 <- table("Family Relationship" = alc$famrel, "Alcohol Usage" = alc$high_use)
round(prop.table(t4, 1)*100, 1)
## Alcohol Usage
## Family Relationship FALSE TRUE
## 1 75.0 25.0
## 2 52.6 47.4
## 3 60.9 39.1
## 4 71.4 28.6
## 5 76.5 23.5
Box plots provide a more condensed yet still descriptive view of our variables, as we can directly see the relationship of each of the four variables to alcohol consumption. Let's look in more detail at how the four variables I chose relate to alcohol consumption among high school students, using box plots.
library(ggpubr)
## Loading required package: magrittr
##
## Attaching package: 'magrittr'
## The following object is masked from 'package:tidyr':
##
## extract
g1 <- ggplot(alc, aes(x = high_use, y = studytime, col = high_use))
p1 <- g1 + geom_boxplot() + xlab("Alcohol Consumption") + ylab("Study Time") + ggtitle("Study hours and alcohol consumption")
g2 <- ggplot(alc, aes(x = high_use, y = absences, col = high_use))
p2 <- g2 + geom_boxplot() + xlab("Alcohol Consumption") + ylab("Number of School Absences") + ggtitle("School absences and alcohol consumption")
g3 <- ggplot(alc, aes(x = high_use, y = goout, col = high_use))
p3 <- g3 + geom_boxplot() + xlab("Alcohol Consumption") + ylab("Going Out With Friends") + ggtitle("Going out with friends and alcohol consumption")
g4 <- ggplot(alc, aes(x = high_use, y = famrel, col = high_use))
p4 <- g4 + geom_boxplot() + xlab("Alcohol Consumption") + ylab("Quality Family Relationship") + ggtitle("Family relationship and alcohol consumption")
ggarrange(p1, p2, p3 , p4, labels = c("A", "B", "C", "D"), ncol = 2, nrow = 2)
The four box plots above show how the chosen variables are associated with alcohol consumption. In each plot, the x-axis shows the two levels of alcohol consumption (TRUE for high consumption, FALSE for low consumption) and the y-axis shows the values of the chosen variable. All of the box plots suggest that my hypotheses about the selected variables are plausible. But are these differences statistically significant? I will run a series of models and validations in the following sections.
Logistic Regression
Now we will fit a logistic regression where alcohol consumption (high_use) is the target variable and the four variables I selected (studytime, goout, absences, famrel) are the predictors.
m<-glm(high_use ~ studytime + goout + absences + famrel, data = alc, family = "binomial")
summary(m)
##
## Call:
## glm(formula = high_use ~ studytime + goout + absences + famrel,
## family = "binomial", data = alc)
##
## Deviance Residuals:
## Min 1Q Median 3Q Max
## -1.8701 -0.7738 -0.5019 0.8042 2.5416
##
## Coefficients:
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) -1.28606 0.70957 -1.812 0.06992 .
## studytime -0.55089 0.16789 -3.281 0.00103 **
## goout 0.75953 0.12041 6.308 2.82e-10 ***
## absences 0.06753 0.02175 3.104 0.00191 **
## famrel -0.33699 0.13681 -2.463 0.01377 *
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## (Dispersion parameter for binomial family taken to be 1)
##
## Null deviance: 465.68 on 381 degrees of freedom
## Residual deviance: 384.07 on 377 degrees of freedom
## AIC: 394.07
##
## Number of Fisher Scoring iterations: 4
Among the four variables, going out with friends (goout) is most strongly associated (Pr = 2.82e-10) with alcohol consumption, whereas the quality of the family relationship (famrel) has a comparatively weaker effect. Moreover, all four variables are significantly associated with alcohol consumption. Of the four, weekly study time (studytime) and the quality of the family relationship (famrel) are inversely related to alcohol consumption: the more hours spent studying and the better the quality of the family relationship, the lower the alcohol consumption. On the other hand, the number of school absences and the frequency of going out with friends are positively associated with alcohol consumption: a student with more school absences who goes out frequently with friends is more likely to be a high consumer.
I will further examine my model by evaluating its coefficients, odds ratios and confidence intervals.
Coef<-coef(m)
OR<-Coef %>% exp
CI<-confint(m) %>% exp
## Waiting for profiling to be done...
cbind(Coef, OR, CI)
## Coef OR 2.5 % 97.5 %
## (Intercept) -1.28606058 0.2763573 0.06723596 1.0961732
## studytime -0.55089391 0.5764343 0.41040872 0.7941804
## goout 0.75953025 2.1372720 1.69853389 2.7261579
## absences 0.06753071 1.0698631 1.02591583 1.1187950
## famrel -0.33699130 0.7139151 0.54460646 0.9331198
In general, if the odds ratio is greater than 1, an increase in the explanatory variable increases the response probability p, whereas if the odds ratio is less than 1, an increase in the explanatory variable decreases the response probability p. If the odds ratio equals 1, the explanatory variable has no effect on the response. According to these statements, the frequency of going out and the number of school absences have a positive association with high alcohol usage. On the other hand, study time and family relationship seem to be negatively associated with high alcohol usage, because their odds ratios are smaller than one. With regard to the confidence intervals, the odds ratio for going out (goout) has the widest interval (1.70-2.73) and study time the narrowest (0.41-0.79). As none of the confidence intervals include 1, we can claim that all of the explanatory variables have an effect on the odds of the outcome, i.e. high alcohol usage.
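To make the interpretation concrete, the odds ratio for a one-unit increase in goout can be recovered directly from the model coefficient (a sketch):
# sketch: exp(coefficient) is the multiplicative change in the odds of high use
b_goout <- coef(m)["goout"]
exp(b_goout)  # about 2.14, i.e. each extra point on goout roughly doubles the odds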
Exploring predictive power of the model
To get insight into the predictive power of my model, I will compare the model's predictions with the actual values.
# predict the probability of high alcohol use for each student
pred_prob <- predict(m, type = "response")
# add the predicted probabilities as a new column
alc <- mutate(alc, probability = pred_prob)
# classify a student as a high user when the predicted probability exceeds 0.5
alc <- mutate(alc, prediction = probability > 0.5)
table(high_use = alc$high_use, prediction = alc$prediction)
## prediction
## high_use FALSE TRUE
## FALSE 242 26
## TRUE 65 49
Based on the 2x2 cross-tabulation, we can see that my model produced 26 false positives and 65 false negatives. In other words, the prediction for a total of (65 + 26) 91 students is wrong. To be more precise, we can compute the overall proportion of wrong predictions made by the model.
# first we define the loss function: the mean proportion of incorrect predictions
LF <- function(class, prob){
  n_wrong <- abs(class - prob) > 0.5
  mean(n_wrong)
}
# now we compute the average number of wrong predictions
LF(alc$high_use, alc$probability)
## [1] 0.2382199
Now we can say that about 24% of the predictions made by my model are wrong (the training error).
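Equivalently, the same training error can be read off the confusion matrix (a sketch):
# sketch: training error as 1 minus the overall accuracy
conf <- table(alc$high_use, alc$prediction)
1 - sum(diag(conf)) / sum(conf)  # matches the loss function result (~0.238)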
Cross Validation
Here we will perform 10-fold cross-validation of our model.
#load required library
library(boot)
CV<-cv.glm(data = alc, cost = LF, glmfit = m, K = 10 )
#finally look at the average number of wrong predictions
CV$delta[1]
## [1] 0.2408377
After performing 10-fold cross-validation, the estimated prediction error increased slightly (from 0.238 on the training data to 0.241). Still, I can proudly claim that my model has better test set performance (about 24% error) than the one we practiced in the DataCamp exercise (about 26%).
The list of materials and links related to clustering and classification can be found below.
course slides by Emma Kämäräinen
DataCamp exercise
After solving the DataCamp exercise and going through the embedded links, I got a general overview of the topic. In the following sections, I will prepare a report based on the exercise instructions.
Data
In this exercise, I will be using the Boston data from the MASS package.
library(MASS)
##
## Attaching package: 'MASS'
## The following object is masked from 'package:dplyr':
##
## select
data(Boston)
str(Boston)
## 'data.frame': 506 obs. of 14 variables:
## $ crim : num 0.00632 0.02731 0.02729 0.03237 0.06905 ...
## $ zn : num 18 0 0 0 0 0 12.5 12.5 12.5 12.5 ...
## $ indus : num 2.31 7.07 7.07 2.18 2.18 2.18 7.87 7.87 7.87 7.87 ...
## $ chas : int 0 0 0 0 0 0 0 0 0 0 ...
## $ nox : num 0.538 0.469 0.469 0.458 0.458 0.458 0.524 0.524 0.524 0.524 ...
## $ rm : num 6.58 6.42 7.18 7 7.15 ...
## $ age : num 65.2 78.9 61.1 45.8 54.2 58.7 66.6 96.1 100 85.9 ...
## $ dis : num 4.09 4.97 4.97 6.06 6.06 ...
## $ rad : int 1 2 2 3 3 3 5 5 5 5 ...
## $ tax : num 296 242 242 222 222 222 311 311 311 311 ...
## $ ptratio: num 15.3 17.8 17.8 18.7 18.7 18.7 15.2 15.2 15.2 15.2 ...
## $ black : num 397 397 393 395 397 ...
## $ lstat : num 4.98 9.14 4.03 2.94 5.33 ...
## $ medv : num 24 21.6 34.7 33.4 36.2 28.7 22.9 27.1 16.5 18.9 ...
dim(Boston)
## [1] 506 14
The Boston data was collected to study housing values in the suburbs of Boston. The table contains 506 observations of 14 variables. Descriptions of the 14 variables are listed below.
| Variables | Description |
|---|---|
| crim | per capita crime rate by town. |
| zn | proportion of residential land zoned for lots over 25,000 sq.ft. |
| indus | proportion of non-retail business acres per town. |
| chas | Charles River dummy variable (= 1 if tract bounds river; 0 otherwise). |
| nox | nitrogen oxides concentration (parts per 10 million). |
| rm | average number of rooms per dwelling. |
| age | proportion of owner-occupied units built prior to 1940. |
| dis | weighted mean of distances to five Boston employment centres. |
| rad | index of accessibility to radial highways. |
| tax | full-value property-tax rate per $10,000. |
| ptratio | pupil-teacher ratio by town. |
| black | 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town. |
| lstat | lower status of the population (percent). |
| medv | median value of owner-occupied homes in $1000s. |
Data Summary
Now, let's look at the summary of the Boston data in the form of a table (instead of the default layout), using the pandoc.table function from the pander package.
library(pander)
##
## Attaching package: 'pander'
## The following object is masked from 'package:GGally':
##
## wrap
pandoc.table(summary(Boston), caption = "Summary of Boston data", split.table = 120)
##
## -----------------------------------------------------------------------------------------------------------------------
## crim zn indus chas nox rm age
## ------------------ ---------------- --------------- ----------------- ---------------- --------------- ----------------
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000 Min. :0.3850 Min. :3.561 Min. : 2.90
##
## 1st Qu.: 0.08204 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02
##
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000 Median :0.5380 Median :6.208 Median : 77.50
##
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917 Mean :0.5547 Mean :6.285 Mean : 68.57
##
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08
##
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000 Max. :0.8710 Max. :8.780 Max. :100.00
## -----------------------------------------------------------------------------------------------------------------------
##
## Table: Summary of Boston data (continued below)
##
##
## ------------------------------------------------------------------------------------------------------------------
## dis rad tax ptratio black lstat medv
## ---------------- ---------------- --------------- --------------- ---------------- --------------- ---------------
## Min. : 1.130 Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32 Min. : 1.73 Min. : 5.00
##
## 1st Qu.: 2.100 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38 1st Qu.: 6.95 1st Qu.:17.02
##
## Median : 3.207 Median : 5.000 Median :330.0 Median :19.05 Median :391.44 Median :11.36 Median :21.20
##
## Mean : 3.795 Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67 Mean :12.65 Mean :22.53
##
## 3rd Qu.: 5.188 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23 3rd Qu.:16.95 3rd Qu.:25.00
##
## Max. :12.127 Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90 Max. :37.97 Max. :50.00
## ------------------------------------------------------------------------------------------------------------------
After getting a summary of the data, it is worthwhile to get a graphical representation. This time we will make a correlogram, a graphical representation of the correlation matrix, using the corrplot function from the corrplot package.
library(corrplot)
## corrplot 0.84 loaded
library(dplyr)
corr_boston<-cor(Boston) %>% round(2)
pandoc.table(corr_boston, split.table = 120)
##
## -------------------------------------------------------------------------------------------------------------------------------
## crim zn indus chas nox rm age dis rad tax ptratio black lstat medv
## ------------- ------- ------- ------- ------- ------- ------- ------- ------- ------- ------- --------- ------- ------- -------
## **crim** 1 -0.2 0.41 -0.06 0.42 -0.22 0.35 -0.38 0.63 0.58 0.29 -0.39 0.46 -0.39
##
## **zn** -0.2 1 -0.53 -0.04 -0.52 0.31 -0.57 0.66 -0.31 -0.31 -0.39 0.18 -0.41 0.36
##
## **indus** 0.41 -0.53 1 0.06 0.76 -0.39 0.64 -0.71 0.6 0.72 0.38 -0.36 0.6 -0.48
##
## **chas** -0.06 -0.04 0.06 1 0.09 0.09 0.09 -0.1 -0.01 -0.04 -0.12 0.05 -0.05 0.18
##
## **nox** 0.42 -0.52 0.76 0.09 1 -0.3 0.73 -0.77 0.61 0.67 0.19 -0.38 0.59 -0.43
##
## **rm** -0.22 0.31 -0.39 0.09 -0.3 1 -0.24 0.21 -0.21 -0.29 -0.36 0.13 -0.61 0.7
##
## **age** 0.35 -0.57 0.64 0.09 0.73 -0.24 1 -0.75 0.46 0.51 0.26 -0.27 0.6 -0.38
##
## **dis** -0.38 0.66 -0.71 -0.1 -0.77 0.21 -0.75 1 -0.49 -0.53 -0.23 0.29 -0.5 0.25
##
## **rad** 0.63 -0.31 0.6 -0.01 0.61 -0.21 0.46 -0.49 1 0.91 0.46 -0.44 0.49 -0.38
##
## **tax** 0.58 -0.31 0.72 -0.04 0.67 -0.29 0.51 -0.53 0.91 1 0.46 -0.44 0.54 -0.47
##
## **ptratio** 0.29 -0.39 0.38 -0.12 0.19 -0.36 0.26 -0.23 0.46 0.46 1 -0.18 0.37 -0.51
##
## **black** -0.39 0.18 -0.36 0.05 -0.38 0.13 -0.27 0.29 -0.44 -0.44 -0.18 1 -0.37 0.33
##
## **lstat** 0.46 -0.41 0.6 -0.05 0.59 -0.61 0.6 -0.5 0.49 0.54 0.37 -0.37 1 -0.74
##
## **medv** -0.39 0.36 -0.48 0.18 -0.43 0.7 -0.38 0.25 -0.38 -0.47 -0.51 0.33 -0.74 1
## -------------------------------------------------------------------------------------------------------------------------------
corrplot(corr_boston, method = "circle", tl.col = "black", type = "upper" , tl.cex = 0.9 )
In the graph, positive correlations are displayed in blue and negative correlations in red, with the intensity of the color and the size of the circle proportional to the correlation coefficient.
Data Standardization
boston_scaled<-scale(Boston)
pandoc.table(summary(Boston), split.table = 120)
##
## -----------------------------------------------------------------------------------------------------------------------
## crim zn indus chas nox rm age
## ------------------ ---------------- --------------- ----------------- ---------------- --------------- ----------------
## Min. : 0.00632 Min. : 0.00 Min. : 0.46 Min. :0.00000 Min. :0.3850 Min. :3.561 Min. : 2.90
##
## 1st Qu.: 0.08204 1st Qu.: 0.00 1st Qu.: 5.19 1st Qu.:0.00000 1st Qu.:0.4490 1st Qu.:5.886 1st Qu.: 45.02
##
## Median : 0.25651 Median : 0.00 Median : 9.69 Median :0.00000 Median :0.5380 Median :6.208 Median : 77.50
##
## Mean : 3.61352 Mean : 11.36 Mean :11.14 Mean :0.06917 Mean :0.5547 Mean :6.285 Mean : 68.57
##
## 3rd Qu.: 3.67708 3rd Qu.: 12.50 3rd Qu.:18.10 3rd Qu.:0.00000 3rd Qu.:0.6240 3rd Qu.:6.623 3rd Qu.: 94.08
##
## Max. :88.97620 Max. :100.00 Max. :27.74 Max. :1.00000 Max. :0.8710 Max. :8.780 Max. :100.00
## -----------------------------------------------------------------------------------------------------------------------
##
## Table: Table continues below
##
##
## ------------------------------------------------------------------------------------------------------------------
## dis rad tax ptratio black lstat medv
## ---------------- ---------------- --------------- --------------- ---------------- --------------- ---------------
## Min. : 1.130 Min. : 1.000 Min. :187.0 Min. :12.60 Min. : 0.32 Min. : 1.73 Min. : 5.00
##
## 1st Qu.: 2.100 1st Qu.: 4.000 1st Qu.:279.0 1st Qu.:17.40 1st Qu.:375.38 1st Qu.: 6.95 1st Qu.:17.02
##
## Median : 3.207 Median : 5.000 Median :330.0 Median :19.05 Median :391.44 Median :11.36 Median :21.20
##
## Mean : 3.795 Mean : 9.549 Mean :408.2 Mean :18.46 Mean :356.67 Mean :12.65 Mean :22.53
##
## 3rd Qu.: 5.188 3rd Qu.:24.000 3rd Qu.:666.0 3rd Qu.:20.20 3rd Qu.:396.23 3rd Qu.:16.95 3rd Qu.:25.00
##
## Max. :12.127 Max. :24.000 Max. :711.0 Max. :22.00 Max. :396.90 Max. :37.97 Max. :50.00
## ------------------------------------------------------------------------------------------------------------------
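Note that the table above was produced with summary(Boston), so it still shows the unscaled values. A quick sketch to verify the standardization of boston_scaled itself:
# sketch: after scale(), every column should have mean 0 and standard deviation 1
round(colMeans(boston_scaled), 10)  # all numerically zero
apply(boston_scaled, 2, sd)         # all exactly 1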
Next, we will create a quantile vector of the crime rate and use it to cut the variable into four categories.
boston_scaled<- data.frame(boston_scaled)
qvc<-quantile(boston_scaled$crim)
crime <- cut(boston_scaled$crim, breaks = qvc, label = c("low", "med_low", "med_high", "high"), include.lowest = TRUE)
boston_scaled <- dplyr::select(boston_scaled, -crim)
boston_scaled<-data.frame(boston_scaled, crime)
table(boston_scaled$crime)
##
## low med_low med_high high
## 127 126 126 127
Linear Discriminant Analysis
library(MASS)
n <- nrow(boston_scaled)
# randomly choose 80% of the rows for training; the rest form the test set
ind <- sample(n, size = n * 0.8)
train <- boston_scaled[ind, ]
test <- boston_scaled[-ind, ]
# fit LDA with the crime categories as the target and all other variables as predictors
lda.fit <- lda(crime ~ ., data = train)
# target classes as numeric
classes <- as.numeric(train$crime)
# plot the lda results
plot(lda.fit, dimen = 2, col = classes, pch = classes)
Class Prediction
crime_cat<-test$crime
test<-dplyr::select(test, -crime)
lda.pred<-predict(lda.fit, newdata = test)
table(correct = crime_cat, predicted = lda.pred$class)
## predicted
## correct low med_low med_high high
## low 20 7 2 0
## med_low 9 18 4 0
## med_high 1 4 16 0
## high 0 0 0 21
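The overall accuracy of the predictions can be computed from the cross-tabulation (a sketch; the exact value depends on the random train/test split):
# sketch: proportion of correctly classified test observations
conf_lda <- table(correct = crime_cat, predicted = lda.pred$class)
sum(diag(conf_lda)) / sum(conf_lda)  # about 0.74 for the split shown above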
K-means Clustering
data(Boston)
# re-scale the full data set and compute the Euclidean distance matrix between observations
boston_scaled1 <- as.data.frame(scale(Boston))
dist_eu <- dist(boston_scaled1)
summary(dist_eu)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 0.1343 3.4625 4.8241 4.9111 6.1863 14.3970
head(boston_scaled1)
## crim zn indus chas nox rm
## 1 -0.4193669 0.2845483 -1.2866362 -0.2723291 -0.1440749 0.4132629
## 2 -0.4169267 -0.4872402 -0.5927944 -0.2723291 -0.7395304 0.1940824
## 3 -0.4169290 -0.4872402 -0.5927944 -0.2723291 -0.7395304 1.2814456
## 4 -0.4163384 -0.4872402 -1.3055857 -0.2723291 -0.8344581 1.0152978
## 5 -0.4120741 -0.4872402 -1.3055857 -0.2723291 -0.8344581 1.2273620
## 6 -0.4166314 -0.4872402 -1.3055857 -0.2723291 -0.8344581 0.2068916
## age dis rad tax ptratio black
## 1 -0.1198948 0.140075 -0.9818712 -0.6659492 -1.4575580 0.4406159
## 2 0.3668034 0.556609 -0.8670245 -0.9863534 -0.3027945 0.4406159
## 3 -0.2655490 0.556609 -0.8670245 -0.9863534 -0.3027945 0.3960351
## 4 -0.8090878 1.076671 -0.7521778 -1.1050216 0.1129203 0.4157514
## 5 -0.5106743 1.076671 -0.7521778 -1.1050216 0.1129203 0.4406159
## 6 -0.3508100 1.076671 -0.7521778 -1.1050216 0.1129203 0.4101651
## lstat medv
## 1 -1.0744990 0.1595278
## 2 -0.4919525 -0.1014239
## 3 -1.2075324 1.3229375
## 4 -1.3601708 1.1815886
## 5 -1.0254866 1.4860323
## 6 -1.0422909 0.6705582
Inspired by this R-bloggers post and this Stack Overflow question.
First we start with an arbitrary number of clusters. Let's start with k = 6 and apply k-means to the data.
# apply k-means with k = 6 clusters
# iter.max = 15 gives the algorithm room to converge; nstart = 50 runs it from 50 random starting sets and keeps the best solution
kmm <- kmeans(boston_scaled1, centers = 6, nstart = 50, iter.max = 15)
kmm
## K-means clustering with 6 clusters of sizes 34, 48, 81, 39, 185, 119
##
## Cluster means:
## crim zn indus chas nox rm
## 1 -0.1985497 -0.2602436 0.2799956 3.6647712 0.3830784 0.2756681
## 2 -0.3766805 -0.1326105 -0.8968664 -0.2723291 -0.2431916 1.5614700
## 3 -0.4124983 1.9281718 -1.0907891 -0.2237229 -1.1507665 0.5969852
## 4 1.8841530 -0.4872402 1.0205262 -0.2723291 1.0253723 -0.3279585
## 5 -0.3797266 -0.3458615 -0.3166917 -0.2723291 -0.4087995 -0.2997470
## 6 0.4622787 -0.4872402 1.1821100 -0.2723291 1.0714202 -0.5414749
## age dis rad tax ptratio black
## 1 0.37213224 -0.4033382 0.001081444 -0.0975633 -0.39245849 0.1715427
## 2 0.06875999 -0.2891339 -0.520091751 -0.8271402 -1.03318239 0.3543957
## 3 -1.40494992 1.5631737 -0.624570346 -0.5824419 -0.68600489 0.3520290
## 4 0.77543680 -0.8795287 1.603651939 1.4894004 0.74063793 -3.0225778
## 5 -0.15958722 0.1869013 -0.592013203 -0.5838115 0.08645442 0.2740981
## 6 0.81621486 -0.8344560 1.029393697 1.1774470 0.61868734 0.1328957
## lstat medv
## 1 -0.1643525 0.5733409
## 2 -0.9751323 1.6344033
## 3 -0.8985906 0.6488122
## 4 1.1687573 -1.0880779
## 5 -0.1153032 -0.1723628
## 6 0.8481491 -0.6401393
##
## Clustering vector:
## 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18
## 2 5 2 2 2 5 5 5 5 5 5 5 5 5 5 5 5 5
## 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
## 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5
## 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54
## 5 5 5 3 3 3 5 5 5 5 5 5 5 5 5 5 3 3
## 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72
## 3 3 3 3 3 5 5 5 5 3 3 3 3 5 5 5 5 5
## 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90
## 5 5 5 5 5 5 5 5 3 5 3 5 5 5 5 5 2 2
## 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108
## 5 5 5 5 5 5 5 2 2 2 5 5 5 5 5 5 5 5
## 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126
## 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 6 5 5
## 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144
## 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 1 6
## 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162
## 6 6 6 6 6 6 6 6 1 6 1 1 4 2 5 6 1 2
## 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180
## 1 1 5 5 2 5 5 5 5 5 5 5 5 2 5 5 2 2
## 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198
## 2 2 2 2 5 5 2 3 3 3 3 3 3 3 3 3 3 3
## 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216
## 3 3 3 3 3 3 3 5 5 5 1 1 1 1 1 5 5 5
## 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234
## 1 5 1 1 1 1 1 2 2 2 2 2 2 2 5 2 2 2
## 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252
## 1 5 1 2 3 3 3 5 3 3 5 5 3 5 3 3 3 3
## 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270
## 3 3 3 3 3 2 2 2 2 2 2 2 2 5 2 2 2 1
## 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288
## 5 5 5 1 1 3 1 1 3 2 2 2 1 3 3 3 3 3
## 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306
## 3 3 3 3 3 5 5 5 5 5 3 3 3 3 3 3 2 5
## 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324
## 2 2 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5
## 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341 342
## 5 5 5 5 5 5 5 3 3 5 5 5 5 5 5 5 5 3
## 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360
## 5 3 3 5 5 3 3 3 3 3 3 3 3 3 1 1 1 6
## 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378
## 6 6 6 1 1 6 6 4 6 1 1 6 1 6 6 6 6 6
## 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396
## 6 6 4 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6
## 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414
## 6 6 6 6 6 6 6 6 4 4 6 6 6 4 4 4 4 4
## 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432
## 4 4 4 4 4 4 6 6 6 4 4 4 4 4 4 4 4 4
## 433 434 435 436 437 438 439 440 441 442 443 444 445 446 447 448 449 450
## 4 4 4 4 4 4 4 6 6 6 6 6 6 4 6 6 6 6
## 451 452 453 454 455 456 457 458 459 460 461 462 463 464 465 466 467 468
## 4 6 6 6 4 4 4 4 6 6 6 6 6 6 6 6 4 6
## 469 470 471 472 473 474 475 476 477 478 479 480 481 482 483 484 485 486
## 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6 6
## 487 488 489 490 491 492 493 494 495 496 497 498 499 500 501 502 503 504
## 6 6 6 6 6 6 6 5 5 5 5 5 5 5 5 5 5 5
## 505 506
## 5 5
##
## Within cluster sum of squares by cluster:
## [1] 340.7321 205.1638 380.3152 359.7645 712.0145 725.2268
## (between_SS / total_SS = 61.5 %)
##
## Available components:
##
## [1] "cluster" "centers" "totss" "withinss"
## [5] "tot.withinss" "betweenss" "size" "iter"
## [9] "ifault"
Let's use the elbow method to estimate the optimal number of clusters.
#Elbow Method for finding the optimal number of clusters
library(ggplot2)
set.seed(1234)
# Compute and plot wss for k = 2 to k = 15.
k.max <- 15
data <- boston_scaled1
wss <- sapply(1:k.max,
function(k){kmeans(data, k)$tot.withinss})
#wss
qplot(1:k.max, wss, geom = c("point", "line"),
      xlab = "Number of clusters K",
      ylab = "Total within-clusters sum of squares")
library(NbClust)
nb <- NbClust(boston_scaled1, diss=NULL, distance = "euclidean",
min.nc=2, max.nc=5, method = "kmeans",
index = "all", alphaBeale = 0.1)
## *** : The Hubert index is a graphical method of determining the number of clusters.
## In the plot of Hubert index, we seek a significant knee that corresponds to a
## significant increase of the value of the measure i.e the significant peak in Hubert
## index second differences plot.
##
## *** : The D index is a graphical method of determining the number of clusters.
## In the plot of D index, we seek a significant knee (the significant peak in Dindex
## second differences plot) that corresponds to a significant increase of the value of
## the measure.
##
## *******************************************************************
## * Among all indices:
## * 12 proposed 2 as the best number of clusters
## * 6 proposed 3 as the best number of clusters
## * 3 proposed 4 as the best number of clusters
## * 3 proposed 5 as the best number of clusters
##
## ***** Conclusion *****
##
## * According to the majority rule, the best number of clusters is 2
##
##
## *******************************************************************
#hist(nb$Best.nc[1,], breaks = max(na.omit(nb$Best.nc[1,])))
Now it is much clearer that the data is better described by two clusters. With that, we run the k-means algorithm again.
# re-run k-means with the optimal number of clusters, k = 2
km_final <- kmeans(boston_scaled1, centers = 2)
pairs(boston_scaled1[3:9], col=km_final$cluster)
More LDA
boston_scaled2<-as.data.frame(scale(Boston))
head(boston_scaled2)
## crim zn indus chas nox rm
## 1 -0.4193669 0.2845483 -1.2866362 -0.2723291 -0.1440749 0.4132629
## 2 -0.4169267 -0.4872402 -0.5927944 -0.2723291 -0.7395304 0.1940824
## 3 -0.4169290 -0.4872402 -0.5927944 -0.2723291 -0.7395304 1.2814456
## 4 -0.4163384 -0.4872402 -1.3055857 -0.2723291 -0.8344581 1.0152978
## 5 -0.4120741 -0.4872402 -1.3055857 -0.2723291 -0.8344581 1.2273620
## 6 -0.4166314 -0.4872402 -1.3055857 -0.2723291 -0.8344581 0.2068916
## age dis rad tax ptratio black
## 1 -0.1198948 0.140075 -0.9818712 -0.6659492 -1.4575580 0.4406159
## 2 0.3668034 0.556609 -0.8670245 -0.9863534 -0.3027945 0.4406159
## 3 -0.2655490 0.556609 -0.8670245 -0.9863534 -0.3027945 0.3960351
## 4 -0.8090878 1.076671 -0.7521778 -1.1050216 0.1129203 0.4157514
## 5 -0.5106743 1.076671 -0.7521778 -1.1050216 0.1129203 0.4406159
## 6 -0.3508100 1.076671 -0.7521778 -1.1050216 0.1129203 0.4101651
## lstat medv
## 1 -1.0744990 0.1595278
## 2 -0.4919525 -0.1014239
## 3 -1.2075324 1.3229375
## 4 -1.3601708 1.1815886
## 5 -1.0254866 1.4860323
## 6 -1.0422909 0.6705582
# run k-means (k = 6) on the Euclidean distance matrix computed earlier
km_bs2 <- kmeans(dist_eu, centers = 6)
# use the resulting cluster assignments as the target variable for LDA
myvar <- km_bs2$cluster
lda.fit_bs2 <- lda(myvar ~ ., data = boston_scaled2)
lda.fit_bs2
## Call:
## lda(myvar ~ ., data = boston_scaled2)
##
## Prior probabilities of groups:
## 1 2 3 4 5 6
## 0.09486166 0.12845850 0.19169960 0.10079051 0.28260870 0.20158103
##
## Group means:
## crim zn indus chas nox rm
## 1 -0.3613809 -0.094199770 -0.47408693 1.5321752 -0.12487357 1.27068222
## 2 1.4172264 -0.487240187 1.06980230 0.4545202 1.34622349 -0.73713928
## 3 0.4194955 -0.487240187 1.14884305 -0.2723291 1.00261960 -0.29765790
## 4 -0.4149170 2.555355046 -1.22875891 -0.1951310 -1.21919439 0.78676843
## 5 -0.4055469 0.003897921 -0.72907965 -0.2723291 -0.78741061 0.07918728
## 6 -0.3559855 -0.464960891 0.08535629 -0.2723291 -0.03907908 -0.34955733
## age dis rad tax ptratio black
## 1 0.2307707 -0.3386056 -0.4961654 -0.7220694 -1.1226766 0.32813467
## 2 0.8557425 -0.9615698 1.2885597 1.2934457 0.4142248 -1.68787016
## 3 0.7547263 -0.7860200 1.2120560 1.2873665 0.6695878 0.03134501
## 4 -1.4488239 1.7464736 -0.7048880 -0.5692695 -0.8353442 0.34924852
## 5 -0.7882694 0.6799862 -0.5650500 -0.7231257 -0.2171966 0.37615865
## 6 0.4578808 -0.3069523 -0.5956710 -0.4102890 0.3497601 0.18939480
## lstat medv
## 1 -0.6138415 1.4407282
## 2 1.1961180 -0.8078336
## 3 0.6821488 -0.6015814
## 4 -0.9773530 0.8760790
## 5 -0.5789506 0.2036280
## 6 0.1782669 -0.3146198
##
## Coefficients of linear discriminants:
## LD1 LD2 LD3 LD4 LD5
## crim 0.06957743 0.29237657 -0.53832201 -0.50922053 -0.067791043
## zn -0.15484693 1.78015330 0.38305361 0.16031762 0.961795193
## indus 0.63817045 0.02644569 0.57679400 0.40269875 -0.080856440
## chas 0.15409184 0.15860914 -0.94342441 0.14850242 -0.003366811
## nox 1.27080209 0.78173714 -0.09726017 -0.26707799 -0.011605310
## rm -0.17566254 0.01368536 -0.06240662 0.73991062 -0.046102532
## age 0.11968517 -0.34671159 -0.07431380 0.27586717 0.910139703
## dis -0.32603841 0.55023745 0.12193551 -0.19596274 -0.582218617
## rad 0.78479581 -0.31767787 0.29044329 0.68534345 -1.614227061
## tax 0.65835846 1.01312767 0.24467220 0.08770833 0.563552653
## ptratio 0.31452906 0.11222333 0.30050298 -0.25310031 0.665512300
## black -0.31394272 -0.28807751 0.73424011 0.90913948 -0.145380180
## lstat 0.48061737 0.41720098 -0.52532223 0.20625897 0.101125672
## medv 0.24483789 0.59335240 -0.85238032 0.03044767 0.143223684
##
## Proportion of trace:
## LD1 LD2 LD3 LD4 LD5
## 0.7243 0.1511 0.0744 0.0292 0.0211
# the function for lda biplot arrows
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
heads <- coef(x)
arrows(x0 = 0, y0 = 0,
x1 = myscale * heads[,choices[1]],
y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
text(myscale * heads[,choices], labels = row.names(heads),
cex = tex, col=color, pos=3)
}
plot(lda.fit_bs2, dimen = 2)
lda.arrows(lda.fit_bs2, myscale = 3)
Better ways to visualize LDA
library(plotly)
##
## Attaching package: 'plotly'
## The following object is masked from 'package:MASS':
##
## select
## The following object is masked from 'package:ggplot2':
##
## last_plot
## The following object is masked from 'package:stats':
##
## filter
## The following object is masked from 'package:graphics':
##
## layout
model_predictors <- dplyr::select(train, -crime)
# check the dimensions
dim(model_predictors)
## [1] 404 13
dim(lda.fit$scaling)
## [1] 13 3
# matrix multiplication
matrix_product <- as.matrix(model_predictors) %*% lda.fit$scaling
matrix_product <- as.data.frame(matrix_product)
plot_ly(x = matrix_product$LD1, y = matrix_product$LD2, z = matrix_product$LD3,
        type = 'scatter3d', mode = 'markers', color = train$crime)
Additional links (also included in the course slides) coming!